Be Selfish and Avoid Dilemmas: Fork After Withholding (FAW) Attacks on Bitcoin
In the Bitcoin system, participants are rewarded for solving cryptographic
puzzles. In order to receive more consistent rewards over time, some
participants organize mining pools and split the rewards from the pool in
proportion to each participant's contribution. However, several attacks
threaten the ability to participate in pools. The block withholding (BWH)
attack makes the pool reward system unfair by letting malicious participants
receive unearned wages while only pretending to contribute work. When two pools
launch BWH attacks against each other, they encounter the miner's dilemma: in a
Nash equilibrium, the revenue of both pools is diminished. In another attack
called selfish mining, an attacker can unfairly earn extra rewards by
deliberately generating forks. In this paper, we propose a novel attack called
a fork after withholding (FAW) attack. FAW is not just another attack. The
reward for an FAW attacker is always equal to or greater than that for a BWH
attacker, and it can be used up to four times more often per pool than a BWH
attack. When considering multiple pools - the current state of the Bitcoin
network - the extra reward for an FAW attack is about 56% more than that for a
BWH attack. Furthermore, when two pools execute FAW attacks on each other, the
miner's dilemma may not hold: under certain circumstances, the larger pool can
consistently win. More importantly, an FAW attack, while using intentional
forks, does not suffer from practicality issues, unlike selfish mining. We also
discuss partial countermeasures against the FAW attack, but finding a cheap and
efficient countermeasure remains an open problem. As a result, we expect to see
FAW attacks among mining pools.
Comment: This paper is an extended version of a paper accepted to ACM CCS 201
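The block withholding attack the abstract builds on can be illustrated with a toy revenue model: an infiltrating miner submits partial proofs of work (shares) to a victim pool but withholds full blocks, so it earns a share-proportional cut of the pool's reward while contributing no real mining power. This is a simplified sketch with normalized network power and a single victim pool, not the paper's exact game-theoretic analysis; all numbers are illustrative.

```python
# Toy BWH revenue model: normalized total network power = 1.
def bwh_revenue(beta, p, tau):
    """Attacker's expected revenue fraction under a BWH attack.

    beta: attacker's total mining power
    p:    victim pool's honest mining power
    tau:  portion of attacker power infiltrating the pool (submits shares
          but withholds full blocks, so it adds no effective mining power)
    """
    effective = 1.0 - tau                  # withheld power finds no blocks
    solo = (beta - tau) / effective        # income from honest solo mining
    pool_blocks = p / effective            # victim pool's share of all blocks
    attacker_cut = tau / (p + tau)         # share-based payout inside the pool
    return solo + pool_blocks * attacker_cut

honest = 0.2                               # revenue when mining fully honestly
attacking = bwh_revenue(beta=0.2, p=0.5, tau=0.1)
print(attacking > honest)  # infiltrating a large pool beats honest mining
```

In this toy setting the attack pays off against a sufficiently large pool, which is the "unearned wages" effect the abstract describes; the FAW attack strengthens it by also releasing withheld blocks as deliberate forks.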
Trick or Heat? Manipulating Critical Temperature-Based Control Systems Using Rectification Attacks
Temperature sensing and control systems are widely used in the closed-loop
control of critical processes such as maintaining the thermal stability of
patients, or in alarm systems for detecting temperature-related hazards.
However, the security of these systems has yet to be completely explored,
leaving potential attack surfaces that can be exploited to take control over
critical systems.
In this paper we investigate the reliability of temperature-based control
systems from a security and safety perspective. We show how unexpected
consequences and safety risks can be induced by physical-level attacks on
analog temperature sensing components. For instance, we demonstrate that an
adversary could remotely manipulate the temperature sensor measurements of an
infant incubator to cause potential safety issues, without tampering with the
victim system or triggering automatic temperature alarms. This attack exploits
the unintended rectification effect that can be induced in operational and
instrumentation amplifiers to control the sensor output, tricking the internal
control loop of the victim system to heat up or cool down. Furthermore, we show
how the exploit of this hardware-level vulnerability could affect different
classes of analog sensors that share similar signal conditioning processes.
Our experimental results indicate that conventional defenses commonly
deployed in these systems are not sufficient to mitigate the threat, so we
propose a prototype design of a low-cost anomaly detector for critical
applications to ensure the integrity of temperature sensor signals.
Comment: Accepted at the ACM Conference on Computer and Communications Security (CCS), 201
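The rectification effect the abstract describes can be sketched numerically: an injected high-frequency interference signal averages to zero on its own, but an asymmetric (diode-like) response in the amplifier stage turns part of it into a DC offset that shifts the low-pass-filtered value the control loop sees. The piecewise-linear rectifier model and all signal parameters below are illustrative assumptions, not the paper's circuit model.

```python
# Toy sketch of rectification of injected interference in a sensor amplifier.
import numpy as np

fs = 1_000_000                                 # sampling rate (Hz)
t = np.arange(0, 0.01, 1 / fs)                 # 10 ms window

sensor_dc = 0.50                               # legitimate sensor voltage (V)
emi = 0.8 * np.sin(2 * np.pi * 100_000 * t)    # injected RF, zero mean

def amplifier(v):
    """Diode-like asymmetry: negative excursions are attenuated."""
    return np.where(v >= 0.0, v, 0.3 * v)

# Low-pass filtering is modeled simply as taking the mean over the window.
clean = amplifier(sensor_dc + 0 * t).mean()
attacked = amplifier(sensor_dc + emi).mean()

# The interference has zero mean, yet the asymmetric response raises the
# filtered output, shifting the temperature the control loop acts on.
print(attacked > clean)
```

A symmetric amplifier would average the interference away entirely, which is why the attack hinges on the unintended rectification behaviour of real op-amps and instrumentation amplifiers.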
DolphinAttack: Inaudible Voice Commands
Speech recognition (SR) systems such as Siri or Google Now have become an
increasingly popular human-computer interaction method, and have turned various
systems into voice controllable systems (VCS). Prior work on attacking VCS shows
that hidden voice commands that are incomprehensible to people can control the
systems. Hidden voice commands, though hidden, are nonetheless audible. In
this work, we design a completely inaudible attack, DolphinAttack, that
modulates voice commands on ultrasonic carriers (e.g., f > 20 kHz) to achieve
inaudibility. By leveraging the nonlinearity of the microphone circuits, the
modulated low-frequency audio commands can be successfully demodulated,
recovered, and more importantly interpreted by the speech recognition systems.
We validate DolphinAttack on popular speech recognition systems, including
Siri, Google Now, Samsung S Voice, Huawei HiVoice, Cortana and Alexa. By
injecting a sequence of inaudible voice commands, we show a few
proof-of-concept attacks, which include activating Siri to initiate a FaceTime
call on iPhone, activating Google Now to switch the phone to airplane mode,
and even manipulating the navigation system in an Audi automobile. We propose
hardware and software defense solutions. We validate that it is feasible to
detect DolphinAttack by classifying the audio using a support vector machine
(SVM), and suggest redesigning voice controllable systems to be resilient to
inaudible voice command attacks.
Comment: 15 pages, 17 figures
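The nonlinearity-based demodulation at the core of DolphinAttack can be sketched with a small simulation: a baseband tone (a hypothetical 400 Hz stand-in for a voice command) is amplitude-modulated onto an inaudible ultrasonic carrier, and a quadratic term models the microphone circuit's nonlinearity. The quadratic model and every frequency below are illustrative assumptions, not the paper's measured hardware behaviour.

```python
# Toy simulation: nonlinearity in a microphone circuit shifts an AM signal
# carried on an ultrasonic carrier back down into the audible band.
import numpy as np

fs = 192_000                      # sampling rate (Hz)
t = np.arange(0, 0.1, 1 / fs)    # 100 ms window
f_base, f_carrier = 400, 30_000  # baseband tone and ultrasonic carrier (Hz)

baseband = np.cos(2 * np.pi * f_base * t)
transmitted = (1 + baseband) * np.cos(2 * np.pi * f_carrier * t)  # AM, >20 kHz

# Quadratic nonlinearity: out = s + a*s^2. The s^2 term of the AM product
# contains a copy of the baseband tone, now inside the audible range.
received = transmitted + 0.1 * transmitted ** 2

spectrum = np.abs(np.fft.rfft(received))
freqs = np.fft.rfftfreq(len(received), 1 / fs)

def peak_near(f, bw=50):
    """Largest spectral magnitude within +/- bw Hz of frequency f."""
    band = (freqs > f - bw) & (freqs < f + bw)
    return spectrum[band].max()

# The baseband tone reappears after the nonlinearity, even though the
# transmitted signal contained no energy below 20 kHz.
print(peak_near(f_base) > 10 * peak_near(f_base + 2_000))
```

After the microphone's internal low-pass filter removes the ultrasonic components, only this demodulated audible copy remains, which is what the speech recognition system then interprets as a command.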